
    Psychosocial Outcomes in Long-Term Cochlear Implant Users

    OBJECTIVES: The objectives of this study were to investigate psychosocial outcomes in a sample of prelingually deaf, early-implanted children, adolescents, and young adults who are long-term cochlear implant (CI) users and to examine the extent to which language and executive functioning predict psychosocial outcomes. DESIGN: Psychosocial outcomes were measured using two well-validated, parent-completed checklists: the Behavior Assessment System for Children and the Conduct Hyperactive Attention Problem Oppositional Symptom. Neurocognitive skills were measured using gold-standard, performance-based assessments of language and executive functioning. RESULTS: CI users were at greater risk for clinically significant deficits in areas related to attention, oppositional behavior, hyperactivity-impulsivity, and social-adaptive skills compared with their normal-hearing peers, although the majority of CI users scored within average ranges relative to Behavior Assessment System for Children norms. Regression analyses revealed that language, visual-spatial working memory, and inhibition-concentration skills predicted psychosocial outcomes. CONCLUSIONS: Findings suggest that underlying delays and deficits in language and executive functioning may place some CI users at risk for difficulties in psychosocial adjustment.

    Development of audiovisual comprehension skills in prelingually deaf children with cochlear implants

    Objective: The present study investigated the development of audiovisual comprehension skills in prelingually deaf children who received cochlear implants. Design: We analyzed results obtained with the Common Phrases (Robbins et al., 1995) test of sentence comprehension from 80 prelingually deaf children with cochlear implants who were enrolled in a longitudinal study, from pre-implantation to 5 years after implantation. Results: The results revealed that prelingually deaf children with cochlear implants performed better under audiovisual (AV) presentation compared with auditory-alone (A-alone) or visual-alone (V-alone) conditions. AV sentence comprehension skills were found to be strongly correlated with several clinical outcome measures of speech perception, speech intelligibility, and language. Finally, pre-implantation V-alone performance on the Common Phrases test was strongly correlated with 3-year postimplantation performance on clinical outcome measures of speech perception, speech intelligibility, and language skills. Conclusions: The results suggest that lipreading skills and AV speech perception reflect a common source of variance associated with the development of phonological processing skills that is shared among a wide range of speech and language outcome measures.

    A longitudinal study of audiovisual speech perception by hearing-impaired children with cochlear implants

    The present study investigated the development of audiovisual speech perception skills in children who are prelingually deaf and received cochlear implants. We analyzed results from the Pediatric Speech Intelligibility (Jerger, Lewis, Hawkins, & Jerger, 1980) test of audiovisual spoken word and sentence recognition skills obtained from a large group of young children with cochlear implants enrolled in a longitudinal study, from pre-implantation to 3 years post-implantation. The results revealed better performance under the audiovisual presentation condition compared with auditory-alone and visual-alone conditions. Performance in all three conditions improved over time following implantation. The results also revealed differential effects of early sensory and linguistic experience. Children from oral communication (OC) education backgrounds performed better overall than children from total communication (TC) backgrounds. Finally, children in the early-implanted group performed better than children in the late-implanted group in the auditory-alone presentation condition after 2 years of cochlear implant use, whereas children in the late-implanted group performed better than children in the early-implanted group in the visual-alone condition. The results of the present study suggest that measures of audiovisual speech perception may provide new methods to assess hearing, speech, and language development in young children with cochlear implants.

    Non-native listeners' recognition of high-variability speech using PRESTO

    BACKGROUND: Natural variability in speech is a significant challenge to robust successful spoken word recognition. In everyday listening environments, listeners must quickly adapt and adjust to multiple sources of variability in both the signal and listening environments. High-variability speech may be particularly difficult to understand for non-native listeners, who have less experience with the second language (L2) phonological system and less detailed knowledge of sociolinguistic variation of the L2. PURPOSE: The purpose of this study was to investigate the effects of high-variability sentences on non-native speech recognition and to explore the underlying sources of individual differences in speech recognition abilities of non-native listeners. RESEARCH DESIGN: Participants completed two sentence recognition tasks involving high-variability and low-variability sentences. They also completed a battery of behavioral tasks and self-report questionnaires designed to assess their indexical processing skills, vocabulary knowledge, and several core neurocognitive abilities. STUDY SAMPLE: Native speakers of Mandarin (n = 25) living in the United States recruited from the Indiana University community participated in the current study. A native comparison group consisted of scores obtained from native speakers of English (n = 21) in the Indiana University community taken from an earlier study. DATA COLLECTION AND ANALYSIS: Speech recognition in high-variability listening conditions was assessed with a sentence recognition task using sentences from PRESTO (Perceptually Robust English Sentence Test Open-Set) mixed in 6-talker multitalker babble. Speech recognition in low-variability listening conditions was assessed using sentences from HINT (Hearing In Noise Test) mixed in 6-talker multitalker babble. Indexical processing skills were measured using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Vocabulary knowledge was assessed with the WordFam word familiarity test, and executive functioning was assessed with the BRIEF-A (Behavioral Rating Inventory of Executive Function - Adult Version) self-report questionnaire. Scores from the non-native listeners on behavioral tasks and self-report questionnaires were compared with scores obtained from native listeners tested in a previous study and were examined for individual differences. RESULTS: Non-native keyword recognition scores were significantly lower on PRESTO sentences than on HINT sentences. Non-native listeners' keyword recognition scores were also lower than native listeners' scores on both sentence recognition tasks. Differences in performance on the sentence recognition tasks between non-native and native listeners were larger on PRESTO than on HINT, although group differences varied by signal-to-noise ratio. The non-native and native groups also differed in the ability to categorize talkers by region of origin and in vocabulary knowledge. Individual non-native word recognition accuracy on PRESTO sentences in multitalker babble at more favorable signal-to-noise ratios was found to be related to several BRIEF-A subscales and composite scores. However, non-native performance on PRESTO was not related to regional dialect categorization, talker and gender discrimination, or vocabulary knowledge. CONCLUSIONS: High-variability sentences in multitalker babble were particularly challenging for non-native listeners. Difficulty under high-variability testing conditions was related to lack of experience with the L2, especially L2 sociolinguistic information, compared with native listeners. Individual differences among the non-native listeners were related to weaknesses in core neurocognitive abilities affecting behavioral control in everyday life.

    AUDIOVISUAL INTEGRATION OF SPEECH BY CHILDREN AND ADULTS WITH COCHLEAR IMPLANTS

    The present study examined how prelingually deafened children and postlingually deafened adults with cochlear implants (CIs) combine visual speech information with auditory cues. Performance was assessed under auditory-alone (A), visual-alone (V), and combined audiovisual (AV) presentation formats. A measure of visual enhancement, RA, was used to assess the gain in performance provided in the AV condition relative to the maximum possible performance in the auditory-alone format. Word recognition was highest for AV presentation followed by A and V, respectively. Children who received more visual enhancement also produced more intelligible speech. Adults with CIs made better use of visual information in more difficult listening conditions (e.g., when multiple talkers or phonemically similar words were used). The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
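    The abstract above names a visual-enhancement measure, RA, but does not give its formula. A minimal sketch, assuming the commonly used relative-gain definition RA = (AV - A) / (1 - A), i.e., the AV improvement expressed as a fraction of the remaining headroom above auditory-alone performance; the function name and the 0-1 proportion scale are illustrative assumptions, not taken from the study:

    ```python
    def visual_enhancement(a_score: float, av_score: float) -> float:
        """Relative audiovisual gain (RA): improvement in the AV condition
        as a proportion of the maximum possible improvement over the
        auditory-alone (A) score. Scores are proportions correct in [0, 1].
        """
        if not (0.0 <= a_score < 1.0 and 0.0 <= av_score <= 1.0):
            raise ValueError("scores must be proportions, with A < 1")
        return (av_score - a_score) / (1.0 - a_score)

    # Example: 0.50 correct auditory-alone, 0.80 correct audiovisually:
    # RA = (0.80 - 0.50) / (1 - 0.50) = 0.60, i.e., 60% of the possible gain.
    ```

    Normalizing by the headroom (1 - A) lets listeners with very different auditory-alone scores be compared on the same scale.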

    Novel Scientific Evidence of Intoxication: Acoustic Analysis of Voice Recordings from the Exxon Valdez

    Part of this article reports original research conducted under the direction of the second and third authors. The initial research was supported by a contract to Indiana University from General Motors Research Laboratories. The specific analyses of voice recordings of Captain Joseph Hazelwood were conducted by them at the request of the National Transportation Safety Board, and are based on tapes and data supplied by the NTSB. The second author may be called as a witness in some of the lawsuits pending against the Exxon Corporation. The opinions expressed in this article concerning whether this evidence meets the legal standards of reliability and admissibility are those of the first author, who is not affiliated with the Speech Research Laboratory and has participated in neither the initial research nor the analysis of the Exxon Valdez tapes.

    Long-term musical experience and auditory and visual perceptual abilities under adverse conditions

    Musicians have been shown to have enhanced speech-perception-in-noise skills. It is unclear whether these improvements are limited to the auditory modality, as no research has examined musicians' visual perceptual abilities under degraded conditions. The current study examined associations between long-term musical experience and visual perception under noisy or degraded conditions. The performance of 11 musicians and 11 age-matched nonmusicians was compared on several auditory and visual perception-in-noise measures. Auditory perception tests included speech-in-noise tests and an environmental-sound-in-noise test. Visual perception tasks included a fragmented sentences task, an object recognition task, and a lip-reading measure. Participants' vocabulary knowledge and nonverbal reasoning abilities were also assessed. Musicians outperformed nonmusicians on the speech-perception-in-noise measures as well as the visual fragmented sentences task. Musicians also displayed better vocabulary knowledge in comparison to nonmusicians. Associations were found between perception of speech and visually degraded text. The findings show that long-term musical experience is associated with modality-general improvements in perceptual abilities. Possible systems supporting musicians' perceptual abilities are discussed.

    Verbal Processing Speed and Executive Functioning in Long-Term Cochlear Implant Users

    Purpose: The purpose of this study was to report how verbal rehearsal speed (VRS), a form of covert speech used to maintain verbal information in working memory, and another verbal processing speed measure, perceptual encoding speed, are related to 3 domains of executive function (EF) at risk in cochlear implant (CI) users: verbal working memory, fluency-speed, and inhibition-concentration. Method: EF, speech perception, and language outcome measures were obtained from 55 prelingually deaf, long-term CI users and matched controls with normal hearing (NH controls). Correlational analyses were used to assess relations between VRS (articulation rate), perceptual encoding speed (digit and color naming), and the outcomes in each sample. Results: CI users displayed slower verbal processing speeds than NH controls. Verbal rehearsal speed was related to 2 EF domains in the NH sample but was unrelated to EF outcomes in CI users. Perceptual encoding speed was related to all EF domains in both groups. Conclusions: Verbal rehearsal speed may be less influential for EF quality in CI users than for NH controls, whereas rapid automatized labeling skills and EF are closely related in both groups. CI users may develop processing strategies in EF tasks that differ from the covert speech strategies routinely employed by NH individuals.